
    A Novel Weight-Shared Multi-Stage CNN for Scale Robustness

    Convolutional neural networks (CNNs) have demonstrated remarkable results in image classification on benchmark tasks and in practical applications. Deeper CNN architectures have recently achieved even higher performance thanks to their robustness to translation of objects in images, as well as their numerous parameters and the resulting high expressive ability. However, CNNs have only limited robustness to other geometric transformations such as scaling and rotation; this restricts the performance of deep CNNs, and no established solution exists. This study focuses on scale transformation and proposes a network architecture called the weight-shared multi-stage network (WSMS-Net), which consists of multiple stages of CNNs. The proposed WSMS-Net is easily combined with existing deep CNNs such as ResNet and DenseNet and enables them to acquire robustness to object scaling. Experimental results on the CIFAR-10, CIFAR-100, and ImageNet datasets demonstrate that existing deep CNNs combined with the proposed WSMS-Net achieve higher classification accuracy with only a minor increase in the number of parameters and computation time. Comment: accepted version, 13 pages
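    The core idea of the abstract above — applying the same (shared) weights to an image at multiple scales — can be illustrated with a minimal sketch. This is not the actual WSMS-Net architecture; the function names and the single-channel, two-scale setup are illustrative assumptions only.

    ```python
    import numpy as np

    def conv2d(x, k):
        """Naive 'valid' 2-D convolution for a single-channel image."""
        H, W = x.shape
        kh, kw = k.shape
        out = np.zeros((H - kh + 1, W - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
        return out

    def weight_shared_multiscale_features(image, kernel):
        """Apply the SAME kernel to the image and a half-resolution copy,
        pool each response globally, and concatenate the two features.
        Because the kernel is shared, an object appearing at half size in
        the full image produces a similar response at the coarser stage."""
        half = image[::2, ::2]  # naive 2x downscaling for illustration
        feats = [conv2d(image, kernel).mean(), conv2d(half, kernel).mean()]
        return np.array(feats)
    ```

    In the real WSMS-Net the stages are full CNNs and their feature maps are combined before classification; the sketch only conveys why weight sharing across scales yields scale-related invariance without adding parameters.
    
    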

    Content-based Image Classification via Visual Learning


    Detecting Vital Documents in Massive Data Streams

    Existing knowledge bases, including Wikipedia, are typically written and maintained by a group of volunteer editors. Meanwhile, numerous web documents are being published, partly due to the popularization of online news and social media. Some of these web documents, called "vital documents", contain novel information that should be taken into account when updating articles in the knowledge bases. However, it is practically impossible for the editors to manually monitor all the relevant web documents. Consequently, there is a considerable time lag between an edit to the knowledge base and the publication dates of such vital documents. This paper proposes a real-time framework for detecting web documents containing novel information in massive document streams. The framework consists of a two-step filter using statistical language models. Further, the framework is implemented on Apache Storm, a distributed and fault-tolerant real-time computation system, in order to process the large number of web documents. On a publicly available web document dataset, the TREC KBA Stream Corpus, the validity of the proposed framework is demonstrated in terms of detection performance and processing time.
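    To make the "filter using statistical language models" idea concrete, here is a minimal sketch of one such filtering step: score a document under a smoothed unigram language model built from knowledge-base text, and flag it when its likelihood is low (i.e., it contains wording the knowledge base does not yet cover). The paper's actual two-step filter is more involved; the function names, the unigram model, and the thresholding rule below are illustrative assumptions only.

    ```python
    import math
    from collections import Counter

    def unigram_lm(corpus_tokens, vocab_size, alpha=1.0):
        """Build an add-alpha smoothed unigram language model from
        knowledge-base tokens; returns a log-probability function."""
        counts = Counter(corpus_tokens)
        total = len(corpus_tokens)

        def logprob(token):
            return math.log((counts[token] + alpha) / (total + alpha * vocab_size))

        return logprob

    def is_vital(doc_tokens, kb_lm, threshold):
        """Hypothetical filtering step: a document is a 'vital' candidate
        when its average per-token log-likelihood under the knowledge-base
        language model falls below a threshold, suggesting novel content."""
        score = sum(kb_lm(t) for t in doc_tokens) / max(len(doc_tokens), 1)
        return score < threshold
    ```

    In a streaming deployment such as the Apache Storm topology described above, a cheap filter like this would run on every incoming document, with only the surviving candidates passed to a more expensive second step.
    
    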

    Data Augmentation using Random Image Cropping and Patching for Deep CNNs

    Deep convolutional neural networks (CNNs) have achieved remarkable results in image processing tasks. However, their high expressive ability risks overfitting. Consequently, data augmentation techniques have been proposed to prevent overfitting while enriching datasets. Recent CNN architectures with more parameters are rendering traditional data augmentation techniques insufficient. In this study, we propose a new data augmentation technique called random image cropping and patching (RICAP), which randomly crops four images and patches them together to create a new training image. Moreover, RICAP mixes the class labels of the four images, yielding an advantage similar to label smoothing. We evaluated RICAP with current state-of-the-art CNNs (e.g., the shake-shake regularization model) by comparison with competitive data augmentation techniques such as cutout and mixup. RICAP achieves a new state-of-the-art test error of 2.19% on CIFAR-10. We also confirmed that deep CNNs with RICAP achieve better results on classification tasks using CIFAR-100 and ImageNet and on an image-caption retrieval task using Microsoft COCO. Comment: accepted version, 16 pages
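    The RICAP procedure described above can be sketched as follows: draw a boundary point that splits the canvas into four regions, fill each region with a random crop from a randomly chosen training image, and mix the labels in proportion to each region's area. This is a minimal NumPy sketch, not the authors' reference implementation; the function signature and the beta-distribution parameter are assumptions.

    ```python
    import numpy as np

    def ricap(images, labels, num_classes, beta=0.3, rng=None):
        """Sketch of random image cropping and patching (RICAP).

        images: array of shape (N, H, W, C); labels: int array of shape (N,).
        Returns one patched training image and its mixed (soft) label.
        """
        rng = rng or np.random.default_rng()
        N, H, W, C = images.shape
        # Boundary point splitting the canvas into four regions.
        w = int(round(W * rng.beta(beta, beta)))
        h = int(round(H * rng.beta(beta, beta)))
        widths = [w, W - w, w, W - w]
        heights = [h, h, H - h, H - h]
        offsets = [(0, 0), (0, w), (h, 0), (h, w)]  # (top, left) of each region

        canvas = np.zeros((H, W, C), dtype=images.dtype)
        soft_label = np.zeros(num_classes)
        for k in range(4):
            wk, hk = widths[k], heights[k]
            idx = rng.integers(N)  # pick a source image for this region
            if wk > 0 and hk > 0:
                x = rng.integers(0, W - wk + 1)  # random crop position
                y = rng.integers(0, H - hk + 1)
                top, left = offsets[k]
                canvas[top:top + hk, left:left + wk] = images[idx, y:y + hk, x:x + wk]
            # Mix labels in proportion to each region's area.
            soft_label[labels[idx]] += (wk * hk) / (W * H)
        return canvas, soft_label
    ```

    Because the four region areas always sum to the full canvas, the soft label is a proper probability distribution; training then uses the cross-entropy against this mixed label, which is where the label-smoothing-like effect comes from.
    
    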

    Student Modelling for ICAI in View of Cognitive Science : Process Driven Model Inference Method

    This paper proposes a process model, a description method for student models, based on a cognitive-science examination of students' problem-solving processes and errors, together with a process-driven model inference method that generates student models described as process models. We further examine the instruction for errors that becomes possible by using the generated student models. A process model represents the process by which a student applies knowledge, and can therefore express errors in the student's application of knowledge; errors in the knowledge itself can be regarded as entrenched errors of application. The process-driven model inference method models the knowledge-application errors that occur while solving a problem by perturbing the process model of that problem's solution process, so the cause of an application error can be explained by the perturbation introduced during model generation. Using this student model, instruction becomes possible that points out the cause of an application error and guides the student toward applying the knowledge correctly. For errors in the knowledge itself, this instruction addresses the cause of the erroneous knowledge.

    A Method for Generating Program Specification from Source Program: Analysis by Transforming Program Structure and Argument Manipulation

    This paper proposes a transformation-based analysis method for automatically generating program specifications from source programs and describes APSG/I (Automatic Program Specification Generator I), a system implementing the method. The transformation-based analysis exploits the similarity among Prolog list-processing programs: within an analogy framework, it generates a specification for a given program from a typical list-processing program and its specification. First, when a program is input, it is compared with predefined typical programs to find differences in two respects: (1) the structure determined by the presence and execution order of instructions, and (2) the combinations of arguments determined by how data is passed between them. Next, using the obtained differences, a predefined program specification is transformed to generate the specification of the input program. The specifications generated by this method have two notable characteristics: (1) they are natural-language sentences, because the program specifications and the transformation rules are defined in natural language, and (2) they are uniform in format while containing sufficient detail. Part of this research was supported by a Grant-in-Aid for Scientific Research on Priority Areas (2) 02249204 from the Ministry of Education.